posterior uncertainty
Learning Physics between Digital Twins with Low-Fidelity Models and Physics-Informed Gaussian Processes
Spitieris, Michail, Steinsland, Ingelin
A digital twin is a computer model that represents an individual, for example, a component, a patient or a process. In many situations, we want to gain knowledge about an individual from its data while incorporating imperfect physical knowledge and also learn from data from other individuals. In this paper, we introduce a fully Bayesian methodology for learning between digital twins in a setting where the physical parameters of each individual are of interest. A model discrepancy term is incorporated in the model formulation of each personalized model to account for the missing physics of the low-fidelity model. To allow sharing of information between individuals, we introduce a Bayesian hierarchical modelling framework where the individual models are connected through a new level in the hierarchy. Our methodology is demonstrated in two case studies: a toy example previously used in the literature, extended to more individuals, and a cardiovascular model relevant for the treatment of hypertension. The case studies show that 1) models not accounting for imperfect physical models are biased and over-confident, 2) models accounting for imperfect physical models are more uncertain but cover the truth, and 3) models learning between digital twins have less uncertainty than the corresponding independent individual models, but are not over-confident.
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models (0.46)
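The "learning between twins" effect the abstract describes can be illustrated with a minimal partial-pooling sketch. Everything below (the toy data model, the known hyperparameters `mu_true`, `tau`, `sigma`) is an illustrative assumption, not the paper's actual cardiovascular model; it only shows why hierarchical sharing reduces per-individual uncertainty without collapsing to over-confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "digital twin" i has a physical parameter theta_i
# drawn from a shared population level N(mu, tau^2). We observe noisy data
# y_ij = theta_i + eps, a stand-in for a low-fidelity model plus discrepancy.
mu_true, tau, sigma = 2.0, 0.5, 1.0
n_individuals, n_obs = 8, 5
theta_true = rng.normal(mu_true, tau, n_individuals)
y = theta_true[:, None] + rng.normal(0, sigma, (n_individuals, n_obs))

# Independent (no-pooling) estimate: per-individual sample mean.
theta_indep = y.mean(axis=1)

# Partial pooling (hierarchical) estimate with known mu, tau, sigma:
# the posterior mean is a precision-weighted average of the individual
# mean and the population mean -- information shared between twins.
prec_data = n_obs / sigma**2
prec_prior = 1 / tau**2
theta_pooled = (prec_data * theta_indep + prec_prior * mu_true) / (prec_data + prec_prior)
post_sd = np.sqrt(1 / (prec_data + prec_prior))

# The pooled posterior sd is strictly smaller than the independent one.
print(post_sd, sigma / np.sqrt(n_obs))
```

In the paper's full setting the low-fidelity model and a GP discrepancy replace the identity observation model used here, but the shrinkage mechanism is the same.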
Efficient identification of informative features in simulation-based inference
Beck, Jonas, Deistler, Michael, Bernaerts, Yves, Macke, Jakob, Berens, Philipp
Simulation-based Bayesian inference (SBI) can be used to estimate the parameters of complex mechanistic models given observed model outputs without requiring access to explicit likelihood evaluations. A prime example for the application of SBI in neuroscience involves estimating the parameters governing the response dynamics of Hodgkin-Huxley (HH) models from electrophysiological measurements, by inferring a posterior over the parameters that is consistent with a set of observations. To this end, many SBI methods employ a set of summary statistics or scientifically interpretable features to estimate a surrogate likelihood or posterior. However, currently, there is no way to identify how much each summary statistic or feature contributes to reducing posterior uncertainty. To address this challenge, one could simply compare the posteriors with and without a given feature included in the inference process. However, for large or nested feature sets, this would necessitate repeatedly estimating the posterior, which is computationally expensive or even prohibitive. Here, we provide a more efficient approach based on the SBI method neural likelihood estimation (NLE): We show that one can marginalize the trained surrogate likelihood post-hoc before inferring the posterior to assess the contribution of a feature. We demonstrate the usefulness of our method by identifying the most important features for inferring parameters of an example HH neuron model. Beyond neuroscience, our method is generally applicable to SBI workflows that rely on data features for inference used in other scientific fields.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.15)
- North America > United States > New York (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Middle East > Jordan (0.04)
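The key computational point of the abstract, that a trained surrogate likelihood can be marginalized post-hoc rather than retrained per feature subset, is easiest to see in a Gaussian toy case. The numbers below are made up for illustration; the paper's surrogate is a learned neural likelihood, not this fixed Gaussian.

```python
import numpy as np

# Suppose a surrogate likelihood over a 3-dimensional summary-statistic
# vector x is (locally) Gaussian, x | theta ~ N(m(theta), S). Dropping
# feature k from the inference then amounts to deleting its row/column,
# with no retraining -- the post-hoc marginalization idea above.
m = np.array([0.1, -0.3, 0.7])           # surrogate mean at some fixed theta
S = np.array([[1.0, 0.2, 0.1],
              [0.2, 0.5, 0.0],
              [0.1, 0.0, 2.0]])           # surrogate covariance

def marginalize(mean, cov, drop):
    """Remove feature index `drop` from a Gaussian surrogate."""
    keep = [i for i in range(len(mean)) if i != drop]
    return mean[keep], cov[np.ix_(keep, keep)]

m2, S2 = marginalize(m, S, drop=2)
print(m2, S2)  # 2-d Gaussian over the remaining features
```

Comparing the posteriors obtained with and without feature `k` (both from the same trained surrogate) then quantifies how much that feature reduces posterior uncertainty.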
Location Trace Privacy Under Conditional Priors
Meehan, Casey, Chaudhuri, Kamalika
Providing meaningful privacy to users of location-based services is particularly challenging when multiple locations are revealed in a short period of time. This is primarily due to the tremendous degree of dependence that can be anticipated between points. We propose a Rényi divergence based privacy framework for bounding expected privacy loss for conditionally dependent data. Additionally, we demonstrate an algorithm for achieving this privacy under Gaussian process conditional priors. This framework both exemplifies why conditionally dependent data is so challenging to protect and offers a strategy for preserving privacy to within a fixed radius for sensitive locations in a user's trace.
- North America > United States > California > San Diego County > San Diego (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > New York > Richmond County > New York City (0.04)
- (4 more...)
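For intuition on the divergence being bounded, here is the closed-form Rényi divergence between two equal-variance Gaussians, the relevant quantity when locations are released with Gaussian noise. The radius, noise scales, and order below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def renyi_divergence_gauss(mu1, mu2, sigma, alpha):
    """Renyi divergence of order alpha between N(mu1, sigma^2) and
    N(mu2, sigma^2): alpha * (mu1 - mu2)^2 / (2 * sigma^2). This is
    the kind of quantity a Renyi-based framework bounds to limit
    expected privacy loss under Gaussian noise."""
    return alpha * (mu1 - mu2) ** 2 / (2 * sigma ** 2)

# Hypothetical 1-d trace point: releasing a noised location at x versus
# at x' a radius r away. More noise => smaller divergence => the two
# releases are harder to distinguish, i.e. stronger privacy.
r, alpha = 100.0, 2.0            # metres, divergence order (illustrative)
for sigma in (50.0, 100.0, 200.0):
    print(sigma, renyi_divergence_gauss(0.0, r, sigma, alpha))
```

The hard part the paper addresses is that with a conditional (e.g. Gaussian process) prior over the trace, each revealed point shifts the conditional means of the others, so the effective `mu1 - mu2` gap compounds across the trace.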
Assessing the Robustness of Bayesian Dark Knowledge to Posterior Uncertainty
Vadera, Meet P., Marlin, Benjamin M.
… into a single network representing the posterior predictive distribution. Further, the authors show that this approach is successful in the classification setting using a student network whose architecture matches that of a single network in the teacher ensemble. The Bayesian Dark Knowledge method also uses online learning of the student model based on single samples from the parameter posterior, resulting in a training algorithm that requires only twice as much space as a standard point estimate-based learning procedure. In this work, we examine the robustness of Bayesian Dark Knowledge to higher levels of posterior uncertainty.
- Asia > Middle East > Jordan (0.04)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.04)
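The distillation step described above, compressing an ensemble's posterior predictive into a single student of matching architecture, can be sketched on a toy model. Everything here (the linear-logistic "teachers" standing in for posterior weight samples, the learning rate, the data) is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# A toy "ensemble" of linear-logistic teachers (posterior weight samples)
# whose averaged predictive probabilities are the distillation target.
X = rng.normal(size=(200, 2))
teachers = rng.normal(loc=[1.0, -2.0], scale=0.3, size=(20, 2))
p_teacher = sigmoid(X @ teachers.T).mean(axis=1)   # posterior predictive

# Train a single student of the same form on the soft teacher labels by
# gradient descent on the cross-entropy (the distillation objective).
# Only the current teacher output and one weight vector are needed,
# which is why the space cost stays comparable to point estimation.
w = np.zeros(2)
for _ in range(500):
    p_student = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p_student - p_teacher) / len(X)

print(np.abs(sigmoid(X @ w) - p_teacher).mean())   # small residual gap
```

The robustness question the paper studies is what happens to this student when the teacher posterior is much more diffuse than the tight cluster simulated here.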
A Bayesian Approach to Robust Reinforcement Learning
Derman, Esther, Mankowitz, Daniel, Mann, Timothy, Mannor, Shie
Robust Markov Decision Processes (RMDPs) intend to ensure robustness with respect to changing or adversarial system behavior. In this framework, transitions are modeled as arbitrary elements of a known and properly structured uncertainty set and a robust optimal policy can be derived under the worst-case scenario. In this study, we address the issue of learning in RMDPs using a Bayesian approach. We introduce the Uncertainty Robust Bellman Equation (URBE) which encourages safe exploration for adapting the uncertainty set to new observations while preserving robustness. We propose a URBE-based algorithm, DQN-URBE, that scales this method to higher dimensional domains. Our experiments show that the derived URBE-based strategy leads to a better trade-off between less conservative solutions and robustness in the presence of model misspecification. In addition, we show that the DQN-URBE algorithm can adapt significantly faster to changing dynamics online compared to existing robust techniques with fixed uncertainty sets.
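The core operation that RMDP methods such as this one build on is the robust Bellman backup: a state-action value is backed up under the worst transition model in the uncertainty set. A minimal sketch with a finite candidate set (the numbers are arbitrary, and the paper's URBE adds a Bayesian uncertainty term on top of this):

```python
import numpy as np

def robust_backup(reward, transition_set, values, gamma=0.9):
    """Worst-case Bellman backup: min over candidate transition
    distributions p of r + gamma * p @ v."""
    return min(reward + gamma * p @ values for p in transition_set)

values = np.array([1.0, 0.0, 2.0])
# Three candidate next-state distributions for one (s, a) pair.
P_set = [np.array([0.8, 0.1, 0.1]),
         np.array([0.1, 0.8, 0.1]),
         np.array([0.1, 0.1, 0.8])]
q = robust_backup(reward=0.5, transition_set=P_set, values=values)
print(q)  # the adversarial model puts its mass on the low-value state
```

Fixing the uncertainty set `P_set` once and for all is what makes plain robust policies overly conservative; adapting it to new observations while keeping the worst-case backup is the trade-off URBE targets.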
Exploring Uncertainty Measures for Image-Caption Embedding-and-Retrieval Task
Hama, Kenta, Matsubara, Takashi, Uehara, Kuniaki, Cai, Jianfei
With the wide development of black-box machine learning algorithms, particularly deep neural networks (DNNs), practical demand for reliability assessment is rapidly rising. On the basis of the concept that `Bayesian deep learning knows what it does not know,' the uncertainty of DNN outputs has been investigated as a reliability measure for classification and regression tasks. However, in the image-caption retrieval task, well-known samples are not always easy-to-retrieve samples. This study investigates two aspects of image-caption embedding-and-retrieval systems. On one hand, we quantify feature uncertainty by considering image-caption embedding as a regression task, and use it for model averaging, which can improve retrieval performance. On the other hand, we further quantify posterior uncertainty by considering retrieval as a classification task, and use it as a reliability measure, which can greatly improve retrieval performance by rejecting uncertain queries. The consistent performance of the two uncertainty measures is observed with different datasets (MS COCO and Flickr30k), different deep learning architectures (dropout and batch normalization), and different similarity functions.
- North America > United States > New York > New York County > New York City (0.14)
- Asia > Japan > Honshū > Kansai > Hyogo Prefecture > Kobe (0.04)
- Asia > Singapore (0.04)
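The two uses of uncertainty described above can be sketched with simulated stochastic forward passes (e.g. Monte Carlo dropout) standing in for a real embedding model; the data, noise levels, and rejection quantile below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each query gets T sampled similarity scores per candidate caption;
# per-query noise scale plays the role of model uncertainty.
T, n_queries, n_candidates = 30, 100, 10
noise = rng.uniform(0.2, 2.0, size=n_queries)
scores = rng.normal(size=(n_queries, n_candidates, T)) * noise[:, None, None]
scores[:, 0, :] += 1.0                        # candidate 0 is the true match

mean_score = scores.mean(axis=2)              # (1) model averaging
retrieved = mean_score.argmax(axis=1)
uncertainty = scores.std(axis=2).max(axis=1)  # (2) reliability measure

# Reject the most uncertain queries and measure accuracy on the rest;
# noisy (uncertain) queries are the ones most likely to be mis-retrieved.
keep = uncertainty <= np.quantile(uncertainty, 0.8)
acc_all = (retrieved == 0).mean()
acc_kept = (retrieved[keep] == 0).mean()
print(acc_all, acc_kept)
```

In the paper the stochastic passes come from dropout or batch-normalization sampling in the trained embedding network rather than from synthetic noise.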
Variational Inference via $\chi$ Upper Bound Minimization
Dieng, Adji Bousso, Tran, Dustin, Ranganath, Rajesh, Paisley, John, Blei, David
Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions $q$ and finds the closest member to the exact posterior $p$. Closeness is usually measured via a divergence $D(q || p)$ from $q$ to $p$. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes $D_{\chi}(p || q)$, the $\chi$-divergence from $p$ to $q$. CHIVI minimizes an upper bound of the model evidence, which we term the $\chi$ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance.
- Asia > Middle East > Jordan (0.05)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
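The sandwich estimate mentioned in the abstract can be checked on a conjugate toy model where the exact evidence is known. The model and the deliberately rough `q` below are illustrative assumptions; the Monte Carlo estimators for the ELBO and the CUBO (with $n = 2$) follow their standard definitions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy conjugate model: z ~ N(0, 1), x | z ~ N(z, 1), observed x = 1,
# so the exact evidence is p(x) = N(1; 0, 2).
x = 1.0
log_evidence = -0.5 * np.log(2 * np.pi * 2) - x**2 / 4

def log_joint(z):
    return (-0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2
            - 0.5 * np.log(2 * np.pi) - 0.5 * z ** 2)

mu_q, sd_q = 0.4, 0.9                       # a deliberately rough q
z = rng.normal(mu_q, sd_q, 100_000)
log_q = -0.5 * np.log(2 * np.pi * sd_q**2) - (z - mu_q) ** 2 / (2 * sd_q**2)
log_w = log_joint(z) - log_q                # log importance weights

elbo = log_w.mean()                         # classical lower bound
n = 2.0                                     # chi-divergence order
cubo = np.log(np.mean(np.exp(n * log_w))) / n   # chi upper bound, CUBO_2

print(elbo, log_evidence, cubo)             # ELBO <= log p(x) <= CUBO
```

Minimizing the CUBO pushes `q` to cover the posterior (mass-covering), which is why it counteracts the variance underestimation of standard KL-based VI.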